Real-world text applications often involve composing a wide range of text control operations, such as editing text with respect to an attribute, manipulating keywords and structure, and generating new text with desired properties. Prior work typically learns or finetunes a language model (LM) to perform individual operations or specific subsets of them. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new efficient approach for composable text operations in the compact latent space of text. The low dimensionality and differentiability of text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs), given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT2) to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired from any relevant data in different domains. Experiments show that composing these operators within our approach can generate and edit high-quality text, substantially improving over previous methods in generation quality and efficiency.
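As a rough illustration of the ODE-based sampler, the sketch below drifts a latent vector along the summed classifier gradients with explicit Euler steps; the encoder, decoder, and classifier interfaces are assumed placeholders rather than the paper's released implementation.

```python
# A minimal sketch of ODE-based sampling in a text latent space, assuming
# plug-in attribute classifiers operating on latent vectors; the Euler
# discretization and all names are illustrative.
import torch

def compose_and_sample(z0, classifiers, targets, n_steps=100, step_size=0.1):
    """Drift a latent vector z along the summed gradients of log p(a_i | z)."""
    z = z0.clone().requires_grad_(True)
    for _ in range(n_steps):
        # Sum the log-likelihoods of the desired attribute values under each
        # plug-in classifier; composition is just addition in latent space.
        log_p = sum(clf(z).log_softmax(-1)[..., t]
                    for clf, t in zip(classifiers, targets))
        grad = torch.autograd.grad(log_p.sum(), z)[0]
        # One explicit Euler step of the ODE dz/dt = grad_z log p(a | z).
        z = (z + step_size * grad).detach().requires_grad_(True)
    return z.detach()

# Hypothetical usage: steer toward positive sentiment and past tense, then
# decode with a GPT2 decoder adapted to the latent space (assumed, not shown).
# z = compose_and_sample(encoder(x), [sentiment_clf, tense_clf], [POS, PAST])
# text = decoder.generate(latent=z)
```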
Data augmentation is an effective approach to tackling overfitting. Many previous works have proposed different data augmentation strategies for NLP, such as noise injection, word replacement, and back-translation. Though effective, they miss an important characteristic of language, compositionality: the meaning of a complex expression is built from its sub-parts. Motivated by this, we propose a compositional data augmentation approach for natural language understanding called TreeMix. Specifically, TreeMix leverages constituency parse trees to decompose sentences into constituent sub-structures, and the Mixup data augmentation technique to recombine them into new sentences. Compared with previous approaches, TreeMix introduces greater diversity and encourages models to learn the compositionality of NLP data. Extensive experiments on text classification and SCAN demonstrate that TreeMix outperforms current state-of-the-art data augmentation methods.
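The following toy sketch illustrates the TreeMix recipe on hand-written constituency parses; the span-size window and label-mixing weight are illustrative choices, not necessarily the paper's exact sampling rules.

```python
# A toy sketch of TreeMix-style augmentation: swap a constituent subtree of
# one sentence into another, then mix the labels by token share.
import random
from nltk import Tree

def subtree_spans(parse, lo=0.1, hi=0.5):
    """Collect constituent subtrees whose length ratio falls in a window."""
    n = len(parse.leaves())
    return [st for st in parse.subtrees()
            if lo <= len(st.leaves()) / n <= hi]

def find_span(tokens, span):
    """Index of the first occurrence of span as a sublist of tokens."""
    for i in range(len(tokens) - len(span) + 1):
        if tokens[i:i + len(span)] == span:
            return i
    return -1

def treemix(parse_a, label_a, parse_b, label_b):
    sub_a = random.choice(subtree_spans(parse_a))
    sub_b = random.choice(subtree_spans(parse_b))
    tok_a, rm, ins = parse_a.leaves(), sub_a.leaves(), sub_b.leaves()
    i = find_span(tok_a, rm)
    new_tokens = tok_a[:i] + ins + tok_a[i + len(rm):]
    # Soft label: weight each source by its share of tokens in the new text.
    lam = (len(tok_a) - len(rm)) / len(new_tokens)
    return " ".join(new_tokens), {label_a: lam, label_b: 1 - lam}

a = Tree.fromstring("(S (NP (DT the) (NN movie)) (VP (VBD was) (ADJP (JJ great))))")
b = Tree.fromstring("(S (NP (DT the) (NN plot)) (VP (VBD felt) (ADJP (JJ dull))))")
print(treemix(a, "pos", b, "neg"))
```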
Mixture-of-experts (MoE) has become popular due to its success in improving model quality, especially in Transformers. By routing tokens with a sparse gate to a few experts, each containing only part of the full model, MoE keeps the model size unchanged while significantly reducing per-token computation, effectively scaling neural networks. However, we find that the current practice of jointly training the experts and the sparse gate degrades model accuracy, diminishing the efficiency of expensive large-scale model training. In this work, we propose the Dense-to-Sparse gate (DTS-Gate) for MoE training. Specifically, instead of using a permanently sparse gate, DTS-Gate begins as a dense gate that routes tokens to all experts, then gradually and adaptively becomes sparser, routing to fewer and fewer experts. MoE with DTS-Gate naturally decouples the training of the experts from that of the sparse gate, by first training all experts and then learning the sparse gate. Experiments show that, compared with the state-of-the-art Switch gate on a GPT-MoE (1.5B) model with the OpenWebText dataset (40GB), DTS-Gate achieves a 2.0x speed-up to reach the same validation perplexity, as well as higher FLOPs efficiency with a 1.42x speed-up.
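A minimal sketch of a dense-to-sparse gate is given below, assuming sparsity is controlled by linearly annealing the number of routed experts per token from all experts down to one; the schedule and module layout are illustrative, not the paper's exact design.

```python
# A dense-to-sparse MoE gate sketch: start dense (route to all experts),
# anneal toward sparse (top-1 routing) over training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DTSGate(nn.Module):
    def __init__(self, d_model, n_experts, total_steps):
        super().__init__()
        self.w = nn.Linear(d_model, n_experts, bias=False)
        self.n_experts, self.total_steps = n_experts, total_steps
        self.step = 0

    def current_k(self):
        # Linearly anneal from dense (all experts) to sparse (top-1).
        frac = min(self.step / self.total_steps, 1.0)
        return max(1, round(self.n_experts * (1 - frac)))

    def forward(self, x):                      # x: [tokens, d_model]
        logits = self.w(x)
        k = self.current_k()
        topv, topi = logits.topk(k, dim=-1)    # keep only k experts per token
        probs = F.softmax(topv, dim=-1)        # renormalize over kept experts
        self.step += 1
        return probs, topi                     # routing weights and expert ids

gate = DTSGate(d_model=16, n_experts=8, total_steps=1000)
weights, experts = gate(torch.randn(4, 16))
```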
In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning at search time. With a single PruningNet trained for the target network, we can search for various pruned networks under different constraints with little human participation. Compared to the state-of-the-art pruning methods, we have demonstrated superior performance on MobileNet V1/V2 and ResNet. Code is available at https://github.com/liuzechun/MetaPruning. This work was done while Zechun Liu and Haoyuan Mu were interns at Megvii Technology.
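The sketch below condenses the PruningNet idea into a single layer: a small hypernetwork maps a sampled channel configuration to conv weights, so pruned candidates can be scored without finetuning. The two-layer generator and the width encoding are assumptions for illustration, not the paper's exact architecture.

```python
# A condensed PruningNet sketch: a hypernetwork generates a full-size weight
# tensor from the sampled (cin, cout) widths, which is then cropped.
import random
import torch
import torch.nn as nn

class PruningNetBlock(nn.Module):
    def __init__(self, max_cin, max_cout, k=3):
        super().__init__()
        self.max_cin, self.max_cout, self.k = max_cin, max_cout, k
        self.gen = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, max_cout * max_cin * k * k))

    def forward(self, x, cin, cout):
        code = torch.tensor([[cin / self.max_cin, cout / self.max_cout]])
        w = self.gen(code).view(self.max_cout, self.max_cin, self.k, self.k)
        w = w[:cout, :cin]                       # crop to the sampled widths
        return nn.functional.conv2d(x[:, :cin], w, padding=self.k // 2)

block = PruningNetBlock(max_cin=32, max_cout=64)
# Stochastic structure sampling: each step trains the generator on random widths.
cin, cout = random.randint(8, 32), random.randint(8, 64)
out = block(torch.randn(2, 32, 8, 8), cin, cout)  # -> [2, cout, 8, 8]
```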
This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. Visualizations of the attention layers illustrate how the SAN locates, layer by layer, the relevant visual clues that lead to the answer.
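One stacked-attention step might look like the following sketch, where the question vector queries the image regions and the attended context is added back to form the next query; dimensions and layer names are illustrative assumptions.

```python
# A minimal sketch of one stacked-attention layer in the SAN style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    def __init__(self, d, d_att):
        super().__init__()
        self.w_img = nn.Linear(d, d_att)
        self.w_qst = nn.Linear(d, d_att)
        self.w_att = nn.Linear(d_att, 1)

    def forward(self, v_img, u):                # v_img: [B, m, d], u: [B, d]
        h = torch.tanh(self.w_img(v_img) + self.w_qst(u).unsqueeze(1))
        p = F.softmax(self.w_att(h).squeeze(-1), dim=-1)   # [B, m] over regions
        v_tilde = (p.unsqueeze(-1) * v_img).sum(1)         # attended image vector
        return v_tilde + u                      # refined query for the next layer

layers = nn.ModuleList([AttentionLayer(256, 128) for _ in range(2)])
u = torch.randn(4, 256)                         # question embedding
v = torch.randn(4, 49, 256)                     # 7x7 grid of region features
for layer in layers:                            # query the image multiple times
    u = layer(v, u)
# u now feeds a classifier over answers (not shown).
```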
This paper investigates the problem of Named Entity Recognition (NER) for extreme low-resource languages with only a few hundred tagged data samples. NER is a fundamental task in Natural Language Processing (NLP). A critical driver accelerating NER systems' progress is the existence of large-scale language corpora that enable NER systems to achieve outstanding performance in languages such as English and French with abundant training data. However, NER for low-resource languages remains relatively unexplored. In this paper, we introduce Mask Augmented Named Entity Recognition (MANER), a new methodology that leverages the distributional hypothesis of pre-trained masked language models (MLMs) for NER. The <mask> token in pre-trained MLMs encodes valuable semantic contextual information. MANER re-purposes the <mask> token for NER prediction. Specifically, we prepend the <mask> token to every word in a sentence for which we would like to predict the named entity tag. During training, we jointly fine-tune the MLM and a new NER prediction head attached to each <mask> token. We demonstrate that MANER is well-suited for NER in low-resource languages; our experiments show that for 100 languages with as few as 100 training examples, it improves on state-of-the-art methods by up to 48% and by 12% on average on F1 score. We also perform detailed analyses and ablation studies to understand the scenarios that are best-suited to MANER.
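A sketch of the MANER input construction and prediction head is shown below, simplifying tokenization to one token per word; the wiring to a concrete pretrained MLM is assumed rather than taken from the paper's code.

```python
# MANER-style sketch: prepend a <mask> token to every word and predict the
# NER tag from the hidden state at each <mask> position.
import torch
import torch.nn as nn

MASK = "<mask>"

def build_input(words):
    """'John lives here' -> '<mask> John <mask> lives <mask> here'."""
    tokens, mask_positions = [], []
    for w in words:
        mask_positions.append(len(tokens))  # NER tag predicted at this index
        tokens.extend([MASK, w])
    return tokens, mask_positions

class NERHead(nn.Module):
    def __init__(self, hidden, n_tags):
        super().__init__()
        self.proj = nn.Linear(hidden, n_tags)

    def forward(self, hidden_states, mask_positions):  # [B, T, H]
        masked = hidden_states[:, mask_positions]      # gather <mask> states
        return self.proj(masked)                       # one tag per word

tokens, pos = build_input(["John", "lives", "in", "Paris"])
head = NERHead(hidden=768, n_tags=9)
# Hidden states would come from the jointly finetuned MLM; random here.
logits = head(torch.randn(1, len(tokens), 768), pos)   # -> [1, 4, 9]
```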
Feature reuse has been a key technique in light-weight convolutional neural network (CNN) design. Current methods usually utilize a concatenation operator to keep large channel numbers cheaply (thus large network capacity) by reusing feature maps from other layers. Although concatenation is parameters- and FLOPs-free, its computational cost on hardware devices is non-negligible. To address this, this paper provides a new perspective to realize feature reuse via the structural re-parameterization technique. A novel hardware-efficient RepGhost module is proposed for implicit feature reuse via re-parameterization, instead of using a concatenation operator. Based on the RepGhost module, we develop our efficient RepGhost bottleneck and RepGhostNet. Experiments on ImageNet and COCO benchmarks demonstrate that the proposed RepGhostNet is much more effective and efficient than GhostNet and MobileNetV3 on mobile devices. In particular, our RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on the ImageNet dataset with fewer parameters and comparable latency on an ARM-based mobile phone.
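RepGhost builds on structural re-parameterization; the sketch below shows the standard conv-BN fusion that underlies such methods, algebraically folding a training-time branch into a single inference-time conv. The RepGhost module's exact branch layout is not reproduced here.

```python
# Structural re-parameterization sketch: fold BatchNorm statistics into the
# preceding conv's weight and bias so the fused conv matches the branch output.
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # per-channel
    fused.weight.data = conv.weight * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias
    return fused

conv, bn = nn.Conv2d(8, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16)
bn.eval()
x = torch.randn(1, 8, 32, 32)
with torch.no_grad():
    y_train = bn(conv(x))                 # training-time branch
    y_fused = fuse_conv_bn(conv, bn)(x)   # single re-parameterized conv
print(torch.allclose(y_train, y_fused, atol=1e-5))  # True
```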
Adversarial training with samples generated by the Fast Gradient Sign Method (FGSM), also known as FGSM-AT, is a computationally simple method for training robust networks. However, during its training procedure, an unstable mode of "catastrophic overfitting" was identified in arXiv:2001.03994 [cs.LG], where the robust accuracy abruptly drops to zero within a single training step. Existing methods use gradient regularizers or random initialization tricks to mitigate this issue, but they either incur high computational cost or lead to lower robust accuracy. In this work, we provide the first study that thoroughly examines a collection of tricks from three perspectives, data initialization, network structure, and optimization, to overcome the catastrophic overfitting in FGSM-AT. Surprisingly, we find that simple tricks, i.e., a) masking partial pixels (even without randomness), b) setting a large convolution stride and smooth activation functions, or c) regularizing the weights of the first convolutional layer, can effectively tackle the overfitting issue. Extensive results on a range of network architectures validate the effectiveness of each proposed trick, and combinations of tricks are also investigated. For example, trained with PreActResNet-18 on CIFAR-10, our method attains 49.8% accuracy against a PGD-50 attacker and 46.4% accuracy against AutoAttack, demonstrating that pure FGSM-AT is capable of enabling robust learners. The code and models are publicly available at https://github.com/ucsc-vlaa/bag-of-tricks-for-fgsm-at.
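A sketch of one FGSM-AT update with trick (a), masking a fraction of the perturbed pixels, is given below; epsilon, the mask ratio, and the per-pixel masking granularity are illustrative choices rather than the paper's exact setup.

```python
# One FGSM adversarial-training step with partial pixel masking (trick a).
import torch
import torch.nn.functional as F

def fgsm_at_step(model, x, y, optimizer, eps=8/255, mask_ratio=0.4):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    delta = eps * grad.sign()                      # FGSM perturbation
    # Trick (a): zero out a fraction of perturbed pixels (can be deterministic).
    mask = (torch.rand_like(delta[:, :1]) > mask_ratio).float()
    x_adv = (x + delta * mask).clamp(0, 1).detach()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)    # train on masked adversary
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```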
Generating new molecules with specified chemical and biological properties via generative models has emerged as a promising direction for drug discovery. However, existing methods require extensive training or fine-tuning on large datasets, which are often unavailable in real-world scenarios. In this work, we propose a new retrieval-based framework for controllable molecule generation. We use a small set of exemplar molecules, i.e., those that (partially) satisfy the design criteria, to steer a pre-trained generative model toward synthesizing molecules that satisfy the given design criteria. We design a retrieval mechanism that fuses the exemplar molecules with the input molecule, trained with a new self-supervised objective that predicts the nearest neighbor of the input molecule. We also propose an iterative refinement process that dynamically updates the generated molecules and the retrieval database for better generalization. Our approach is agnostic to the choice of generative model and requires no task-specific fine-tuning. On a variety of tasks, ranging from simple design criteria to the challenging real-world scenario of designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate that our approach extrapolates well beyond the retrieval database and achieves better performance and wider applicability than previous methods.
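The retrieve-fuse-generate-refine loop might be organized as in the sketch below, where the embedding function, generator, and property scorer are all placeholder components standing in for the paper's trained modules.

```python
# High-level sketch of retrieval-guided generation with iterative refinement;
# every component here is a hypothetical stand-in, not the paper's code.
import torch

def retrieve(z_input, exemplar_bank, k=5):
    """Return the k exemplar embeddings nearest to the input molecule."""
    sims = exemplar_bank @ z_input                 # cosine if rows are normalized
    return exemplar_bank[sims.topk(k).indices]

def generate_with_retrieval(x, encode, generator, exemplar_bank, score, rounds=3):
    best, best_score = x, score(x)
    for _ in range(rounds):                        # iterative refinement
        z = encode(best)
        z_fused = torch.cat([z, retrieve(z, exemplar_bank).mean(0)])  # fuse
        candidate = generator(z_fused)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
            # Dynamically grow the retrieval database with strong candidates.
            exemplar_bank = torch.cat([exemplar_bank,
                                       encode(candidate).unsqueeze(0)])
    return best

# Dummy components for illustration only:
encode = lambda m: torch.randn(16)
generator = lambda z: torch.randn(16)             # stands in for a molecule decoder
score = lambda m: float(m.sum())                  # stands in for a property oracle
bank = torch.randn(100, 16)
result = generate_with_retrieval(torch.randn(16), encode, generator, bank, score)
```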
In this competition, participants will address two fundamental causal challenges in machine learning in the context of education, using time-series data. The first is to identify the causal relationships between different constructs, where a construct is defined as the smallest element of learning. The second is to predict the impact of learning one construct on the ability to answer questions on other constructs. Addressing these challenges will enable the optimization of students' knowledge acquisition, which can be deployed in real edtech solutions impacting millions of students. Participants will run these tasks both in an idealized environment with synthetic data and in a real-world scenario with evaluation data collected from a series of A/B tests.